Frequency probability is the interpretation of probability that defines an event's probability as the limit of its relative frequency in a large number of trials. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. The move from the classical to the frequentist view marked a paradigm shift in the development of statistical thought. The school is often associated with Jerzy Neyman and Egon Pearson, who described the logic of statistical hypothesis testing. Other influential figures of the frequentist school include John Venn, R.A. Fisher, and Richard von Mises.
Frequentists talk about probabilities only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space under consideration. For any event, exactly one of two possibilities holds: either it occurs or it does not. The relative frequency of occurrence of an event, in a number of repetitions of the experiment, is a measure of the probability of that event.
Thus, if $n_t$ is the total number of trials and $n_x$ is the number of trials where the event $x$ occurred, the probability $P(x)$ of the event occurring will be approximated by the relative frequency as follows:

$$P(x) \approx \frac{n_x}{n_t}.$$
A further and more controversial claim is that in the "long run," as the number of trials approaches infinity, the relative frequency will converge exactly to the probability:[1]

$$P(x) = \lim_{n_t \to \infty} \frac{n_x}{n_t}.$$
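The convergence of relative frequency can be illustrated by simulation. The following is a minimal sketch, not from the source; the underlying probability 0.5, the trial counts, and the function name are assumptions chosen for illustration:

```python
import random

def relative_frequency(p_true: float, n_trials: int, seed: int = 0) -> float:
    """Estimate an event's probability by its relative frequency n_x / n_t
    over n_trials independent Bernoulli trials with success probability p_true."""
    rng = random.Random(seed)
    n_x = sum(1 for _ in range(n_trials) if rng.random() < p_true)
    return n_x / n_trials

# As n_t grows, the relative frequency tends toward the underlying probability.
for n_t in (10, 100, 10_000, 1_000_000):
    print(f"n_t = {n_t:>9}: relative frequency = {relative_frequency(0.5, n_t)}")
```

Any single finite run only approximates the probability; the frequentist claim concerns the limiting behavior as the number of trials grows without bound.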
One objection to this is that we can only ever observe a finite sequence, and thus the extrapolation to the infinite involves unwarranted metaphysical assumptions. This conflicts with the standard claim that the frequency interpretation is somehow more "objective" than other theories of probability.
This is a highly technical and scientific definition and does not claim to capture all connotations of the concept 'probable' in colloquial natural language. Compare how physicists use the concept of force in a precise manner, even though force is also a concept in many natural languages, appearing in religious texts, for example. However, this seldom causes problems or confusion, as the context usually reveals whether the scientific concept is intended.
As William Feller noted:

There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.
The frequentist view was arguably foreshadowed by Aristotle, in Rhetoric,[2] when he wrote:
the probable is that which for the most part happens[3]
It was given explicit statement by Robert Leslie Ellis in "On the Foundations of the Theory of Probabilities"[4] read on 14 February 1842,[2] (and much later again in "Remarks on the Fundamental Principles of the Theory of Probabilities"[5]). Antoine Augustin Cournot presented the same conception in 1843, in Exposition de la théorie des chances et des probabilités.[6]
Perhaps the first elaborate and systematic exposition was by John Venn, in The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability (1866, 1876, 1888).
According to the Oxford English Dictionary, the term 'frequentist' was first used[7] by M. G. Kendall in 1949,[8] to contrast with Bayesians, whom he called "non-frequentists" (he cites Harold Jeffreys).
See also: Probability interpretations